32 research outputs found

    On embodied memetic evolution and the emergence of behavioural traditions in Robots

    This paper describes ideas and initial experiments in embodied imitation using e-puck robots, developed as part of a project whose aim is to demonstrate the emergence of artificial culture in collective robot systems. Imitated behaviours (memes) will undergo variation because of the noise and heterogeneities of the robots and their sensors. Robots can select which memes to enact, and, because we have a multi-robot collective, memes are able to undergo multiple cycles of imitation, with inherited characteristics. We thus have the three evolutionary operators: variation, selection and inheritance, and, as we describe in this paper, experimental trials show that we are able to demonstrate embodied movement-meme evolution.
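    The paper reports experiments on physical e-puck robots; the sketch below is only a minimal illustration of the three operators named in the abstract, where the meme representation (a list of turn/distance segments), the imitation noise model and the selection policy are all assumptions rather than details taken from the paper.

    ```python
    import random

    # Hypothetical representation (not from the paper): a movement meme is a
    # list of (turn_in_degrees, distance) segments enacted in sequence.
    def imitate(observed, noise=0.1):
        """Variation: imitation through real sensors and actuators is imperfect,
        so every copied segment is perturbed slightly."""
        return [(turn + random.gauss(0, noise * 90.0),
                 dist + random.gauss(0, noise * max(dist, 1e-6)))
                for turn, dist in observed]

    def select(repertoire):
        """Selection: the robot chooses which stored meme to enact; the policy
        here is uniform random, purely as a placeholder."""
        return random.choice(repertoire)

    # Inheritance: enacted memes are re-imitated by other robots over many
    # cycles, so copied (and mutated) characteristics persist in the collective.
    robots = [[[(90.0, 0.20), (-90.0, 0.20)]] for _ in range(4)]  # one seed meme each
    for cycle in range(20):
        dancer = random.randrange(len(robots))
        observer = random.choice([i for i in range(len(robots)) if i != dancer])
        robots[observer].append(imitate(select(robots[dancer])))
    ```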

    Chemotaxis Based Virtual Fence for Swarm Robots in Unbounded Environments

    This paper presents a novel swarm robotics application of the chemotaxis behaviour observed in microorganisms. The approach is used to make exploring robots return to a work area around the swarm’s nest within a boundless environment. We investigate the performance of our algorithm through extensive simulation studies and hardware validation. Results show that the chemotaxis approach is effective at keeping the swarm close to both stationary and moving nests. Comparing these results with the unrealistic case in which a boundary wall keeps the swarm within a target search area shows that our chemotaxis approach produces competitive results.
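    The abstract does not describe the controller, but bacterial chemotaxis is commonly modelled as run-and-tumble behaviour. The sketch below shows how such a rule, combined with a virtual concentration field centred on the nest, could pull robots back towards the work area; the field, step size and tumbling rule are assumptions, not the paper's algorithm.

    ```python
    import math
    import random

    NEST = (0.0, 0.0)

    def concentration(pos):
        """Assumed virtual 'chemical' field: strongest at the nest and decaying
        with distance, so following the gradient leads back towards the nest."""
        return math.exp(-math.dist(pos, NEST))

    def chemotaxis_step(pos, heading, prev_conc, step=0.05):
        """Run-and-tumble: keep the current heading while the sensed
        concentration is rising, tumble to a random heading when it falls."""
        pos = (pos[0] + step * math.cos(heading),
               pos[1] + step * math.sin(heading))
        conc = concentration(pos)
        if conc < prev_conc:                              # drifting away from the nest
            heading = random.uniform(0.0, 2.0 * math.pi)  # tumble
        return pos, heading, conc

    # A robot released outside the work area performs a biased random walk home.
    pos, heading = (3.0, -2.0), 0.0
    conc = concentration(pos)
    for _ in range(5000):
        pos, heading, conc = chemotaxis_step(pos, heading, conc)
    print(pos)  # typically ends near the nest at (0, 0)
    ```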

    Empowerment or Engagement? Digital Health Technologies for Mental Healthcare

    We argue that while digital health technologies (e.g. artificial intelligence, smartphones, and virtual reality) present significant opportunities for improving the delivery of healthcare, key concepts used to evaluate and understand their impact can obscure significant ethical issues related to patient engagement and experience. Specifically, we focus on the concept of empowerment and ask whether it is adequate for addressing some significant ethical concerns that relate to digital health technologies for mental healthcare. We frame these concerns using five key principles of AI ethics (autonomy, beneficence, non-maleficence, justice, and explicability), which have their roots in the bioethical literature, in order to critically evaluate the role that digital health technologies will have in the future of digital healthcare.

    Towards Exogenous Fault Detection in Swarm Robotic Systems

    No full text
    It has long been assumed that swarm systems are robust, in the sense that the failure of individual robots will have little detrimental effect on a swarm's overall collective behaviour. However, Bjerknes and Winfield [1] have recently shown that this is not always the case, particularly in the event of partial failures (such as motor failure). The reliability modelling in [1] shows that overall system reliability rapidly decreases with swarm size, so this is not a problem that can be solved simply by adding more robots to the swarm. Instead, future large-scale swarm systems will need an active approach to dealing with failed individuals if they are to achieve a high level of fault tolerance.
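    The abstract reports only the conclusion of the reliability modelling. The toy calculation below, which assumes a per-robot failure probability and that any single partial failure anchors the whole swarm, merely illustrates why reliability can fall with swarm size; it is not the model from [1].

    ```python
    # Toy illustration (an assumption, not the model from [1]): if a single
    # partially-failed robot can anchor the swarm, the swarm only completes
    # its mission when no robot suffers a partial failure.
    p_partial_failure = 0.05   # assumed per-robot probability of partial failure per mission

    for n in (1, 5, 10, 20, 50, 100):
        swarm_reliability = (1.0 - p_partial_failure) ** n
        print(f"N = {n:3d}   P(no partial failure) = {swarm_reliability:.3f}")

    # Reliability falls quickly as n grows, so adding robots makes the problem
    # worse; hence the need to actively detect and deal with failed individuals.
    ```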

    Human-robot relationships and the development of responsible social robots

    No full text
    The contemporary development of social robots has been accompanied by concerns over their capacity to cause harm to humans. Our RoboTIPS study sets out to design and trial an innovative design feature that will advance the safe operation of social robots and foster societal trust. The Ethical Black Box (EBB) collects data about a robot's actions in real time and in context; when an incident occurs, this data can be used within a wider investigation process to determine what went wrong and prevent similar adverse events. In this paper we draw on Lucy Suchman's groundbreaking work on human-machine relationships to elucidate the goals, practices and potential impact of our study. We align with Suchman's positioning of safety as an accomplishment of situated action and draw on her analysis to describe the actions of the EBB-enhanced social robot as contingent on context and the robot's status as a social agent. We also describe shared priorities in our methodological approaches. We close with observations on how participatory design and an ethnomethodologically-informed stance towards data collection and analysis can contribute to the field of responsible innovation (RI), which seeks to ensure that innovations are undertaken in the public interest and provide societal value.
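    The paper describes the EBB by its purpose rather than its implementation; the sketch below is only a guess at the kind of timestamped, context-rich record such a logger might append, and every field name is invented.

    ```python
    import json
    import time
    from dataclasses import asdict, dataclass, field

    @dataclass
    class EBBRecord:
        """Hypothetical Ethical Black Box entry: field names are invented, but
        the intent follows the paper's description of logging a robot's actions
        in real time together with their context."""
        timestamp: float
        action: str                    # what the robot did
        sensor_snapshot: dict          # the context it perceived at the time
        decision_inputs: dict = field(default_factory=dict)  # why it acted

    def log_event(path, record):
        """Append-only log, so a complete record of events survives for any
        later accident or near-miss investigation."""
        with open(path, "a") as f:
            f.write(json.dumps(asdict(record)) + "\n")

    log_event("ebb.log", EBBRecord(
        timestamp=time.time(),
        action="approach_person",
        sensor_snapshot={"lidar_min_range_m": 0.8, "speed_mps": 0.3},
    ))
    ```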

    Robot accident investigation: a case study in responsible robotics

    No full text
    Robot accidents are inevitable. Although rare, they have been happening since assembly line robots were first introduced in the 1960s. But a new generation of social robots is now becoming commonplace. Equipped with sophisticated embedded artificial intelligence (AI), social robots might be deployed as care robots to assist elderly or disabled people to live independently. Smart robot toys offer a compelling interactive play experience for children, and increasingly capable autonomous vehicles (AVs) offer the promise of hands-free personal transport and fully autonomous taxis. Unlike industrial robots, which are deployed in safety cages, social robots are designed to operate in human environments and interact closely with humans; the likelihood of robot accidents is therefore much greater for social robots than industrial robots. This chapter sets out a draft framework for social robot accident investigation, one that proposes both the technology and processes that would allow social robot accidents to be investigated with no less rigour than we expect of air or rail accident investigations. The chapter also places accident investigation within the practice of responsible robotics and makes the case that social robotics without accident investigation would be no less irresponsible than aviation without air accident investigation.

    Marine Robotics Competitions: a Survey

    No full text